
    Hardware-based Security for Virtual Trusted Platform Modules

    Virtual Trusted Platform Modules (TPMs) were proposed as a software-based alternative to hardware-based TPMs, allowing their cryptographic functionality to be used in scenarios where multiple TPMs are required on a single platform, such as virtualized environments. However, virtualizing TPMs, and especially virtualizing the Platform Configuration Registers (PCRs), runs counter to one of the core principles of Trusted Computing, namely the need for a hardware-based root of trust. In this paper we show how the strength of hardware-based security can be gained for virtual PCRs by binding them to their corresponding hardware PCRs. We propose two approaches for such a binding: the first variant uses binary hash trees, whereas the second uses incremental hashing. In addition, we present an FPGA-based implementation of both variants and evaluate their performance.
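
    A minimal sketch of the binary-hash-tree variant, assuming a simplified TPM-style extend operation and made-up vPCR values (not the paper's FPGA implementation): the virtual PCR values are aggregated into the root of a binary hash tree, and that root is extended into a hardware PCR so the virtual registers inherit its hardware protection.

    import hashlib

    def sha1(data):
        return hashlib.sha1(data).digest()

    def merkle_root(leaves):
        # Root of a binary hash tree over the current virtual PCR values.
        level = [sha1(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                      # duplicate the last node if odd
                level.append(level[-1])
            level = [sha1(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def extend(pcr, measurement):
        # TPM-style extend: PCR_new = H(PCR_old || measurement).
        return sha1(pcr + measurement)

    # Bind the state of all virtual PCRs to one hardware PCR (hypothetical values):
    vpcrs = [b"\x00" * 20, sha1(b"vm-kernel"), sha1(b"vm-app")]
    hw_pcr = extend(b"\x00" * 20, merkle_root(vpcrs))

    The incremental-hashing variant mentioned in the abstract would replace the tree computation with a hash that can be updated per virtual PCR; the tree is shown here only because it is the simpler structure to sketch.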

    Behavior Compliance Control for More Trustworthy Computation Outsourcing

    Computation outsourcing has become a hot topic in both academic research and industry because of the benefits that accompany it, such as cost reduction, the ability to focus on core businesses, and the possibility of benefiting from modern payment models like pay-per-use. Unfortunately, outsourcing to potentially untrusted third-party hosting platforms requires a lot of trust. Clients need assurance that the intended code was loaded and executed, and that the application behaves correctly and trustworthily at runtime. That is, techniques from Trusted Computing that issue evidence about the execution of binaries and report it to a challenger are not sufficient: challengers are more interested in evidence that allows detecting misbehavior while the outsourced computation is running on the hosting platform. Another challenging issue is providing secure data storage for the collected evidence. Such secure storage is provided by the Trusted Platform Module (TPM). In outsourcing scenarios where virtualization technologies are applied, the use of virtual TPMs (vTPMs) comes into consideration. However, researchers have identified drawbacks and limitations of TPMs, including privacy and maintainability issues, problems with the sealing functionality, and high communication and management effort. On the other hand, virtualizing TPMs, and especially virtualizing the Platform Configuration Registers (PCRs), runs counter to one of the core principles of Trusted Computing, namely the need for hardware-based secure storage. In this thesis, we propose different approaches and architectures to mitigate these problems. In the first part of the thesis we propose an approach called Behavior Compliance Control (BCC), which defines architectures describing how the behavior of outsourced computations is captured and controlled, and how its compliance with a trusted behavior model is judged. We present approaches for two abstraction levels: one at the level of program code, the other at the level of abstract executable business processes. In the second part of the thesis, we propose approaches that address the aforementioned problems with TPMs and vTPMs, which are used as storage for the evidence data collected as assurance of behavior compliance. In particular, we recognized that using the SHA-1 hash to measure system components requires maintaining a large set of hashes of presumably trustworthy software; furthermore, during attestation, the full configuration of the platform is revealed. Our approach shows how chameleon hashes can mitigate the impact of these two problems. To increase the security of vTPMs, we show in another approach how the strength of hardware-based security can be gained for virtual PCRs by binding them to their corresponding hardware PCRs; we propose two variants of such a binding, one based on binary hash trees and the other on incremental hashing. We further provide implementations of the proposed approaches and evaluate their impact in practice. Furthermore, we empirically evaluate the relative efficacy of the different behavioral abstractions of BCC on real-world applications, examining the feasibility, effectiveness, scalability, and efficiency of the approach. To this end, we chose two kinds of applications, a web-based and a desktop application, and performed different attacks on them, such as malicious-input attacks and SQL injection attacks. The results show that such attacks can be detected, so that applying our approach can increase protection against them.
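
    To illustrate the compliance judgment on the program-code level, here is a minimal sketch under assumed data structures (not the thesis's BCC implementation): a trusted behavior model is the set of event n-grams observed during reference runs, and a runtime trace is judged compliant only if it introduces no unknown n-grams. All function names and the n-gram abstraction are illustrative assumptions.

    def ngrams(trace, n=3):
        # All length-n windows of the event trace, e.g. method calls or requests.
        return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

    def build_model(reference_runs, n=3):
        # Trusted behavior model: union of n-grams seen in the reference runs.
        model = set()
        for run in reference_runs:
            model |= ngrams(run, n)
        return model

    def is_compliant(trace, model, n=3):
        # Any n-gram never seen during the reference runs counts as a deviation.
        return ngrams(trace, n) <= model

    model = build_model([["login", "query", "render"], ["login", "query", "export"]])
    print(is_compliant(["login", "query", "render"], model))        # True
    print(is_compliant(["login", "exec_shell", "render"], model))   # False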

    Group-Based Attestation: Enhancing Privacy and Management in Remote Attestation

    One of the central aims of Trusted Computing is to provide the ability to attest that a remote platform is in a certain trustworthy state. While in principle this functionality can be achieved by the remote attestation process standardized by the Trusted Computing Group, privacy and scalability problems make it difficult to realize in practice: in particular, the use of the SHA-1 hash to measure system components requires maintaining a large set of hashes of presumably trustworthy software; furthermore, during attestation, the full configuration of the platform is revealed. In this paper we show how chameleon hashes allow mitigating these two problems. Using a prototypical implementation, we furthermore show that the approach is feasible in practice.
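
    To make the collision property of chameleon hashes concrete, the toy sketch below implements a discrete-log chameleon hash in the style of Krawczyk and Rabin: the holder of the trapdoor can pick a new randomizer so that a different message hashes to the same value, which is the kind of property group-based schemes can build on. The group parameters are tiny, insecure demo values, and the whole construction is an illustrative assumption rather than the scheme used in the paper.

    import hashlib

    p, q, g = 2039, 1019, 4       # tiny safe-prime group (demo only, insecure)
    x = 123                       # trapdoor (secret key)
    y = pow(g, x, p)              # public key

    def h(m):
        return int.from_bytes(hashlib.sha1(m).digest(), "big") % q

    def cham_hash(m, r):
        # CH(m, r) = g^H(m) * y^r mod p
        return (pow(g, h(m), p) * pow(y, r, p)) % p

    def collide(m, r, m_new):
        # With the trapdoor x, choose r' so that CH(m_new, r') == CH(m, r).
        return (r + (h(m) - h(m_new)) * pow(x, -1, q)) % q

    r = 77
    r_new = collide(b"app-1.0", r, b"app-1.1")
    assert cham_hash(b"app-1.0", r) == cham_hash(b"app-1.1", r_new)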

    Dynamic Anomaly Detection for More Trustworthy Outsourced Computation

    A hybrid cloud combines a trusted private cloud with a public cloud owned by an untrusted cloud provider. This is problematic: when a hybrid cloud shifts computation from its private to its public part, it must trust the public part to execute the computation as intended. We show how public-cloud providers can use dynamic anomaly detection to increase their clients’ trust in outsourced computations. The client first defines the computation’s reference behavior by running an automated dynamic analysis in the private cloud. The cloud provider then generates an application profile when executing the outsourced computation for its client, persisted in tamper-proof storage. When in doubt, the client checks the profile against the recorded reference behavior. False positives are identified by re-executing the dubious computation in the trusted private cloud and are used to refine the description of the reference behavior. The approach is fully automated. Using 3,000 harmless and 118 malicious inputs to different Java applications, we show that our approach is effective. In particular, different characterizations of behavior can yield anything from low numbers of false positives to low numbers of false negatives, effectively trading trustworthiness for computation cost in the private cloud.
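
    A minimal sketch of the client-side check, under assumed representations (not the paper's implementation): an application profile is modeled as the set of events recorded during a run, a run is flagged when its profile leaves the reference behavior, and profiles confirmed harmless by re-execution in the private cloud are folded back into the reference. All names and values are illustrative.

    def is_within_reference(profile, reference):
        # The recorded profile is acceptable if it contains no unknown events.
        return profile <= reference

    def refine_on_false_positive(profile, reference, reexecute_privately):
        # Re-execute the dubious computation in the private cloud; if it turns
        # out to be harmless (a false positive), extend the reference behavior.
        return reference | profile if reexecute_privately(profile) else reference

    reference = {"open:config", "query:orders", "render:report"}
    suspect = {"open:config", "query:orders", "exec:shell"}
    print(is_within_reference(suspect, reference))   # False -> re-check in private cloud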